Embedding approximately low-dimensional $\ell_2^2$ metrics into $\ell_1$

Authors

  • Amit Deshpande (Microsoft Research India, [email protected])
  • Prahladh Harsha (Tata Institute of Fundamental Research, [email protected])
  • Rakesh Venkat (Tata Institute of Fundamental Research, [email protected])
Abstract

Goemans showed that any $n$ points $x_1, \ldots, x_n$ in $d$ dimensions satisfying $\ell_2^2$ triangle inequalities can be embedded into $\ell_1$ with worst-case distortion at most $\sqrt{d}$. We extend this to the case when the points are approximately low-dimensional, albeit with average distortion guarantees. More precisely, we give an $\ell_2^2$-to-$\ell_1$ embedding with average distortion at most the stable rank, $\mathrm{sr}(M)$, of the matrix $M$ consisting of columns $\{x_i - x_j\}_{i<j}$. Average-distortion embeddings suffice for applications such as the SPARSEST CUT problem. Our embedding gives an approximation algorithm for the SPARSEST CUT problem on low threshold-rank graphs, where earlier work was inspired by the Lasserre SDP hierarchy, and improves on a previous result of the first and third author [Deshpande and Venkat, in Proc. 17th APPROX, 2014]. Our ideas give a new perspective on $\ell_2^2$ metrics, an alternate proof of Goemans' theorem, and a simpler proof for average distortion $\sqrt{d}$. Furthermore, while the seminal result of Arora, Rao and Vazirani giving an $O(\sqrt{\log n})$ guarantee for UNIFORM SPARSEST CUT can be seen to imply Goemans' theorem with average distortion, our work opens up the possibility of proving such a result directly via a Goemans-like theorem.
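
To make the key quantities concrete: the stable rank of a matrix is $\mathrm{sr}(M) = \|M\|_F^2 / \|M\|_2^2$, the squared Frobenius norm over the squared spectral norm, and it always satisfies $1 \le \mathrm{sr}(M) \le \mathrm{rank}(M) \le d$. The following Python sketch (illustrative only, not from the paper) computes $\mathrm{sr}(M)$ for the difference matrix $M$ and checks the $\ell_2^2$ triangle inequalities $\|x_i - x_j\|^2 \le \|x_i - x_k\|^2 + \|x_k - x_j\|^2$:

    import numpy as np
    from itertools import combinations, permutations, product

    def stable_rank(M):
        # sr(M) = ||M||_F^2 / ||M||_2^2; always between 1 and rank(M).
        return np.linalg.norm(M, 'fro')**2 / np.linalg.norm(M, 2)**2

    def satisfies_l22_triangle(X, tol=1e-9):
        # Check ||x_i - x_j||^2 <= ||x_i - x_k||^2 + ||x_k - x_j||^2 for all triples.
        # Arbitrary point sets need not satisfy this; hypercube vertices do, since
        # their squared Euclidean distances equal Hamming distances, which are a metric.
        sq = lambda u, v: np.sum((u - v)**2)
        return all(sq(X[i], X[j]) <= sq(X[i], X[k]) + sq(X[k], X[j]) + tol
                   for i, j, k in permutations(range(len(X)), 3))

    X = np.array(list(product([0.0, 1.0], repeat=3)))   # 8 points in d = 3
    M = np.column_stack([X[i] - X[j]
                         for i, j in combinations(range(len(X)), 2)])
    print(stable_rank(M), satisfies_l22_triangle(X))    # sr(M) <= d = 3, True

In particular, $\mathrm{sr}(M)$ can be much smaller than $d$ when the points lie near a low-dimensional subspace, which is the "approximately low-dimensional" regime of the abstract.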


Similar articles

Embedding approximately low-dimensional $\ell_2^2$ metrics into $\ell_1$

Goemans showed that any $n$ points $x_1, \ldots, x_n$ in $d$ dimensions satisfying $\ell_2^2$ triangle inequalities can be embedded into $\ell_1$, with worst-case distortion at most $\sqrt{d}$. We consider an extension of this theorem to the case when the points are approximately low-dimensional as opposed to exactly low-dimensional, and prove the following analogous theorem, albeit with average distortion guarantees: There ...


Optimality of $\ell_2/\ell_1$-optimization block-length dependent thresholds

The recent work of [4, 11] rigorously proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that $\ell_1$-optimiza...
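
The $\ell_1$-optimization in question is standard basis pursuit: recover the unknown sparse vector from $y = Ax$ by solving $\min \|x\|_1$ subject to $Ax = y$. A minimal sketch using the textbook linear-programming reformulation (not code from the paper; all names illustrative):

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        # min ||x||_1 s.t. Ax = y, via the standard LP: split x = p - q with
        # p, q >= 0 and minimize sum(p) + sum(q) subject to A(p - q) = y.
        m, n = A.shape
        c = np.ones(2 * n)
        A_eq = np.hstack([A, -A])
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
        assert res.success
        return res.x[:n] - res.x[n:]

    rng = np.random.default_rng(1)
    m, n, s = 40, 100, 5                      # measurements, dimension, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x_hat = basis_pursuit(A, A @ x_true)
    print(np.linalg.norm(x_hat - x_true))     # near zero when m is large enough vs. s

The phase-transition results the abstract refers to characterize exactly how large the ratio $m/n$ must be, as a function of $s/n$, for this recovery to succeed.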


A Noise-Robust Method with Smoothed $\ell_1/\ell_2$ Regularization for Sparse Moving-Source Mapping

The method described here performs blind deconvolution of the beamforming output in the frequency domain. To provide accurate blind deconvolution, sparsity priors are introduced with a smooth $\ell_1/\ell_2$ regularization term. As the mean of the noise in the power spectrum domain is dependent on its variance in the time domain, the proposed method includes a variance estimation step, which allows more ...
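
For intuition only (the paper's exact smoothing is not reproduced here), the $\ell_1/\ell_2$ ratio is a scale-invariant sparsity measure, equal to 1 for 1-sparse vectors and $\sqrt{n}$ for maximally flat ones; one common way to make it differentiable at zero entries is to add a small $\varepsilon$ inside the norms, as in this illustrative sketch:

    import numpy as np

    def smoothed_l1_l2(x, eps=1e-6):
        # Smoothed l1/l2 ratio: differentiable everywhere (including x = 0),
        # approaching ||x||_1 / ||x||_2 as eps -> 0. Illustrative smoothing choice.
        l1 = np.sum(np.sqrt(x**2 + eps))      # smooth surrogate for |x_i|
        l2 = np.sqrt(np.sum(x**2) + eps)      # smooth surrogate for ||x||_2
        return l1 / l2

    dense = np.ones(100)
    sparse = np.zeros(100); sparse[0] = 1.0
    print(smoothed_l1_l2(dense), smoothed_l1_l2(sparse))  # ~10 vs ~1: lower = sparser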


One-bit compressed sensing with partial Gaussian circulant matrices

In this paper we consider memoryless one-bit compressed sensing with randomly subsampled Gaussian circulant matrices. We show that in a small sparsity regime and for small enough accuracy $\delta$, $m\sim \delta^{-4} s\log(N/s\delta)$ measurements suffice to reconstruct the direction of any $s$-sparse vector up to accuracy $\delta$ via an efficient program. We derive this result by proving that...
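
In memoryless one-bit compressed sensing only the signs $y = \mathrm{sign}(Ax)$ are observed, so at best the direction $x/\|x\|_2$ is identifiable. A minimal sketch of the measurement model with a simple linear estimator $\hat{x} \propto A^{\mathsf{T}} y$ (illustrative; the paper's efficient program and its circulant subsampling are not reproduced):

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, s = 2000, 200, 5
    A = rng.standard_normal((m, n))           # dense Gaussian here; the paper instead
                                              # subsamples a Gaussian circulant matrix
    x = np.zeros(n); x[:s] = rng.standard_normal(s)
    x /= np.linalg.norm(x)                    # only the direction is identifiable
    y = np.sign(A @ x)                        # memoryless one-bit measurements

    x_hat = A.T @ y / m                       # simple linear estimator (illustrative)
    x_hat /= np.linalg.norm(x_hat)
    print(np.linalg.norm(x_hat - x))          # direction error shrinks as m grows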


Convergence Analysis of the Dynamics of a Special Kind of Two-Layered Neural Networks with $\ell_1$ and $\ell_2$ Regularization

In this paper, we extend the convergence analysis of the dynamics of two-layered bias-free networks with one ReLU output. We consider two popular regularization terms, the $\ell_1$ and $\ell_2$ norms of the parameter vector $w$, added to the square loss function with coefficient $\lambda/2$. We prove that when $\lambda$ is small, the weight vector $w$ converges to the optimal solution $\hat{w}$ (wi...
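
A minimal sketch of the regularized objective described above, with a bias-free one-ReLU-output student fit to a teacher vector; the Gaussian input distribution, helper names, and the use of an empirical rather than population loss are illustrative assumptions:

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def regularized_loss(w, w_star, X, lam, norm_ord=2):
        # Empirical square loss of a bias-free one-ReLU-output student against a
        # teacher w_star, plus (lam / 2) * ||w||, with norm_ord = 1 or 2 as in the
        # abstract's two regularization choices.
        preds = relu(X @ w)
        targets = relu(X @ w_star)
        return 0.5 * np.mean((preds - targets)**2) + 0.5 * lam * np.linalg.norm(w, norm_ord)

    rng = np.random.default_rng(3)
    n, d = 5000, 10
    X = rng.standard_normal((n, d))           # Gaussian inputs (a common assumption)
    w_star = rng.standard_normal(d)
    print(regularized_loss(np.zeros(d), w_star, X, lam=0.01))
    print(regularized_loss(w_star, w_star, X, lam=0.01))   # only the penalty remains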




Publication date: 2015